# Long Context Reasoning
## Qwen2.5-Math-7B-RoPE-300k (open-r1)

License: Apache-2.0 · Task: Large Language Model · Library: Transformers · Language: English

Qwen2.5-Math-7B-RoPE-300k is a variant of Qwen2.5-Math-7B that extends the context length to 32k tokens by adjusting the base frequency of Rotary Position Embedding (RoPE).
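
The card ships no example code; the sketch below shows one way to inspect that RoPE setting with Hugging Face `transformers`, assuming the Hub repo id `open-r1/Qwen2.5-Math-7B-RoPE-300k` and the standard Qwen2 config fields `rope_theta` and `max_position_embeddings`.

```python
# Minimal sketch, assuming the repo id below; not the model card's own recipe.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("open-r1/Qwen2.5-Math-7B-RoPE-300k")
print(config.rope_theta)               # RoPE base frequency (the "300k" in the name)
print(config.max_position_embeddings)  # extended context window (32k tokens)
```

Raising `rope_theta` stretches the rotary wavelengths, so positions beyond the original training window still map to distinct, in-distribution rotation angles.
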
## Gemma3-R1984-4B (ginipick)

Task: Image-to-Text · Library: Transformers · Languages: Multiple

Gemma3-R1984-4B is an agent AI platform built on Google's Gemma-3-4B model, with support for multimodal file processing and deep-research capabilities.
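
To illustrate the Image-to-Text task tag, here is a hedged usage sketch: the repo id `ginipick/Gemma3-R1984-4B` is inferred from the listing, the image URL is a placeholder, and `image-text-to-text` is the `transformers` pipeline tag used for Gemma 3-class multimodal models.

```python
# Hedged usage sketch; repo id inferred from the listing above.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="ginipick/Gemma3-R1984-4B")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/page.png"},  # placeholder image
        {"type": "text", "text": "Summarize this document page."},
    ],
}]
print(pipe(text=messages, max_new_tokens=128))
```
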
## CodeLlama-70b-hf (meta-llama)

Task: Large Language Model · Library: Transformers · Language: Other

Code Llama is Meta's family of code generation and understanding models, available in sizes from 7B to 70B parameters. This model is the 70B-parameter base version.
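
A brief, hedged completion sketch for the base (non-instruct) model: the repo id `meta-llama/CodeLlama-70b-hf` is assumed from the listing (access on the Hub is gated), and the 70B weights require substantial GPU memory.

```python
# Hedged sketch: plain code completion with the 70B base model.
# Repo id assumed from the listing; loading the weights needs multiple GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "meta-llama/CodeLlama-70b-hf"  # assumed Hub id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

inputs = tok("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```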